This is the analysis for the project “The Voice Familiarity Benefit for Infant Phoneme Recognition”.
Please note: the variable TestSpeaker (levels: 1, 3, 4) in this analysis document is called Test Speaker (levels: Training Speaker, Novel Speaker, Familiar Non-Training Speaker) in the manuscript. TestSpeaker1 corresponds to the Training Speaker, TestSpeaker3 to the Familiar Non-Training Speaker (note that TestSpeaker3 corresponds to Speaker4 in the manuscript!), and TestSpeaker4 to the Novel Speaker (TestSpeaker4 corresponds to Speaker3 in the manuscript!). The variable Group (levels: fam, unfam) is called Training Speaker (levels: familiar, unfamiliar) in the accompanying manuscript.
In the accompanying preregistration we explain how we set the priors based on pilot data, and we ran a prior predictive check, posterior predictive checks, and a sensitivity analysis on simulated data.
All the models in this R Markdown file were fitted beforehand with the code shown (included for transparency), and are then read in from disk (for efficiency).
source(here("code/R","00_prepare_data_exp.R"), local = knitr::knit_global())
summary(data_rec)
## Subj MMR Group TestSpeaker VoiceTrain
## 1 : 3 Min. :-24.1012 fam :102 S1:65 Length:200
## 2 : 3 1st Qu.: -3.3563 unfam: 98 S3:66 Class :character
## 3 : 3 Median : 1.2131 S4:69 Mode :character
## 4 : 3 Mean : 0.8331
## 5 : 3 3rd Qu.: 4.9718
## 7 : 3 Max. : 17.7909
## (Other):182
## TestOrder age sex nrSpeakersDaily
## Min. :1143 Min. :-2.2454 Length:200 Min. :-0.5855
## 1st Qu.:1143 1st Qu.:-0.5016 Class :character 1st Qu.:-0.4785
## Median :1413 Median : 0.1959 Mode :character Median :-0.1963
## Mean :1765 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:2243 3rd Qu.: 0.7771 3rd Qu.: 0.1151
## Max. :2423 Max. : 1.4746 Max. : 7.1211
##
## sleepState BlocksAsleep Lab daysVoicetraining
## asleep: 15 Length:200 Length:200 Min. :15.00
## awake :185 Class :character Class :character 1st Qu.:18.00
## Mode :character Mode :character Median :20.00
## Mean :20.42
## 3rd Qu.:21.00
## Max. :28.00
## NA's :3
## Anmerkungen CommentsProtokoll mumDist TrialsStan
## Length:200 Length:200 Min. :-1.5725 Min. :13.00
## Class :character Class :character 1st Qu.:-0.7724 1st Qu.:38.00
## Mode :character Mode :character Median :-0.2059 Median :50.00
## Mean : 0.0000 Mean :46.73
## 3rd Qu.: 0.5657 3rd Qu.:56.00
## Max. : 3.2609 Max. :70.00
##
## TrialsDev
## Min. :10.00
## 1st Qu.:19.75
## Median :24.00
## Mean :23.39
## 3rd Qu.:28.00
## Max. :39.00
##
We based our priors on pilot data (see the preregistration), giving us:
priors <- c(set_prior("normal(3.5, 20)",
class = "Intercept"),
set_prior("normal(0, 10)",
class = "b"),
set_prior("normal(0, 20)",
class = "sigma"))
Let’s visualize the priors.
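A minimal sketch of such a visualization, using base R density curves (note that brms truncates the sigma prior at zero, which the plain normal curve below ignores):

```r
# Sketch: density curves of the three priors.
x <- seq(-60, 60, length.out = 500)
plot(x, dnorm(x, 3.5, 20), type = "l", lwd = 2, ylab = "density",
     xlab = "parameter value", main = "Prior densities")
lines(x, dnorm(x, 0, 10), lty = 2)  # slope prior
lines(x, dnorm(x, 0, 20), lty = 3)  # sigma prior (half-normal after truncation)
legend("topright", lty = 1:3, lwd = c(2, 1, 1),
       legend = c("Intercept: normal(3.5, 20)",
                  "b: normal(0, 10)",
                  "sigma: normal(0, 20)"))
```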
Here, we check what happens if we run the model based on our priors only.
num_chains <- 4
num_iter <- 4000
num_warmup <- num_iter / 2
num_thin <- 1
priorpredcheck_rec <- brm(MMR ~ 1 + TestSpeaker * Group +
mumDist +
nrSpeakersDaily +
sleepState +
age +
(1 + TestSpeaker | Subj),
data = data_rec,
prior = priors,
family = gaussian(),
control = list(
adapt_delta = .99,
max_treedepth = 15
),
iter = num_iter,
chains = num_chains,
warmup = num_warmup,
thin = num_thin,
cores = num_chains,
seed = project_seed,
file = here("data", "model_output", "R0_model_priorpredcheck_rec.rds"),
file_refit = "on_change",
save_pars = save_pars(all = TRUE),
sample_prior = "only"
)
priorpredcheck_rec <- readRDS(here("data", "model_output", "R0_model_priorpredcheck_rec.rds"))
pp <- posterior_predict(priorpredcheck_rec)
pp <- t(pp)
# distribution of mean MMR
meanMMR = colMeans(pp)
hist(meanMMR, breaks = 40)
summary(priorpredcheck_rec)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: MMR ~ 1 + TestSpeaker * Group + mumDist + nrSpeakersDaily + sleepState + age + (1 + TestSpeaker | Subj)
## Data: data_rec (Number of observations: 200)
## Draws: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
## total post-warmup draws = 8000
##
## Group-Level Effects:
## ~Subj (Number of levels: 72)
## Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept) 6.91 7.97 0.20 26.78 1.00
## sd(TestSpeaker1) 6.93 8.58 0.25 26.24 1.00
## sd(TestSpeaker2) 7.03 7.95 0.22 26.91 1.00
## cor(Intercept,TestSpeaker1) 0.00 0.50 -0.87 0.87 1.00
## cor(Intercept,TestSpeaker2) -0.00 0.50 -0.88 0.88 1.00
## cor(TestSpeaker1,TestSpeaker2) 0.00 0.50 -0.88 0.88 1.00
## Bulk_ESS Tail_ESS
## sd(Intercept) 8811 3857
## sd(TestSpeaker1) 9771 4323
## sd(TestSpeaker2) 9444 3302
## cor(Intercept,TestSpeaker1) 13293 5311
## cor(Intercept,TestSpeaker2) 14464 4657
## cor(TestSpeaker1,TestSpeaker2) 7369 5735
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 3.51 20.39 -36.70 42.38 1.00 12956 5465
## TestSpeaker1 -0.20 10.08 -19.84 19.78 1.00 13562 5615
## TestSpeaker2 -0.03 10.12 -19.68 19.84 1.00 14467 5530
## Group1 -0.01 10.09 -19.76 19.72 1.00 14305 5591
## mumDist -0.08 9.99 -19.97 19.51 1.00 12657 5630
## nrSpeakersDaily -0.03 9.99 -19.90 19.70 1.00 16300 6141
## sleepState1 0.05 10.02 -19.76 19.62 1.00 13221 6204
## age 0.02 10.04 -19.54 19.48 1.00 14204 5546
## TestSpeaker1:Group1 0.03 9.94 -19.22 19.64 1.00 14372 5684
## TestSpeaker2:Group1 -0.08 10.16 -20.17 19.94 1.00 14089 6045
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 15.86 12.04 0.71 44.40 1.00 7420 3865
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
In our priors, we specified a mean of 3.5 and an SD of 20. In these
plots and the model summary, we see that the mean of the MMR is
estimated at 3.51 and is normally distributed, with an estimated error
of 20.39. Sigma was specified at normal(0,20) - the
estimate of 15.86 with an estimated error of 12.04 seems reasonable. The
effects on the slope were specified at normal(0,10) which
also seems to be reasonably estimated.
Conclusion: our priors seem to be reasonable.
To check how dependent the model’s estimates are on the priors, we
perform a sensitivity analysis. We will check the estimates of our model
with our proposed priors (intercept: normal(3.5,20), slope
normal(0,10), sigma normal(0,20)), and three
alternative priors:
1. intercept normal(0,20), slope normal(0,10), sigma normal(0,20): the same SDs, but with a mean of zero for the intercept
2. intercept normal(0,30), slope normal(0,20), sigma normal(0,30): weakly informative, because of the large SDs
3. intercept normal(0,50), slope normal(0,40), sigma normal(0,50): uninformative, not really biologically plausible anymore
priors_orig <-
c(set_prior("normal(3.5, 20)",
class = "Intercept"),
set_prior("normal(0, 10)",
class = "b"),
set_prior("normal(0, 20)",
class = "sigma"))
priors1 <-
c(set_prior("normal(0, 20)",
class = "Intercept"),
set_prior("normal(0, 10)",
class = "b"),
set_prior("normal(0, 20)",
class = "sigma"))
priors2 <-
c(set_prior("normal(0, 30)",
class = "Intercept"),
set_prior("normal(0, 20)",
class = "b"),
set_prior("normal(0, 30)",
class = "sigma"))
priors3 <-
c(set_prior("normal(0, 50)",
class = "Intercept"),
set_prior("normal(0, 40)",
class = "b"),
set_prior("normal(0, 50)",
class = "sigma"))
Let’s plot these different priors for the intercept:
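A sketch of this comparison (assuming ggplot2 is available; only the intercept priors are drawn):

```r
# Sketch: overlay the four intercept priors used in the sensitivity analysis.
library(ggplot2)
x <- seq(-150, 150, length.out = 1000)
prior_labels <- c("orig: normal(3.5, 20)", "alt1: normal(0, 20)",
                  "alt2: normal(0, 30)", "alt3: normal(0, 50)")
prior_df <- data.frame(
  x = rep(x, times = 4),
  density = c(dnorm(x, 3.5, 20), dnorm(x, 0, 20),
              dnorm(x, 0, 30), dnorm(x, 0, 50)),
  prior = factor(rep(prior_labels, each = length(x)), levels = prior_labels)
)
ggplot(prior_df, aes(x = x, y = density, colour = prior)) +
  geom_line() +
  labs(x = "Intercept (MMR)", y = "Density", colour = "Prior")
```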
We ran the model specified above with our proposed priors and with the three alternative priors, so that we can compare the resulting estimates. For these models we increased the number of iterations from 4000 to 60000.
m_orig <- readRDS(here("data", "model_output", "R1_model_sensitivity_rec.rds"))
m_alt_1 <- readRDS(here("data", "model_output", "R1_model_sensitivity_altern1_rec.rds"))
m_alt_2 <- readRDS(here("data", "model_output", "R1_model_sensitivity_altern2_rec.rds"))
m_alt_3 <- readRDS(here("data", "model_output", "R1_model_sensitivity_altern3_rec.rds"))
posterior_summary(m_orig, variable=c("b_Intercept","b_TestSpeaker1", "b_Group1", "sigma"))
## Estimate Est.Error Q2.5 Q97.5
## b_Intercept 1.5049509 0.9604075 -0.3691705 3.393009
## b_TestSpeaker1 -0.2665833 1.3479886 -2.9090724 2.368186
## b_Group1 -0.4635148 0.9893258 -2.4011757 1.481701
## sigma 5.8385328 0.6589253 4.4644272 6.956916
posterior_summary(m_alt_1, variable=c("b_Intercept","b_TestSpeaker1", "b_Group1", "sigma"))
## Estimate Est.Error Q2.5 Q97.5
## b_Intercept 1.4997318 0.9602626 -0.3768944 3.380061
## b_TestSpeaker1 -0.2648054 1.3511708 -2.9214934 2.384943
## b_Group1 -0.4601595 0.9966474 -2.4061822 1.513424
## sigma 5.8312478 0.6423729 4.4404548 6.946174
posterior_summary(m_alt_2, variable=c("b_Intercept","b_TestSpeaker1", "b_Group1", "sigma"))
## Estimate Est.Error Q2.5 Q97.5
## b_Intercept 1.5178559 0.9673583 -0.3738218 3.425500
## b_TestSpeaker1 -0.2756665 1.3631154 -2.9494145 2.401873
## b_Group1 -0.4512327 0.9974925 -2.4093115 1.508571
## sigma 5.8589261 0.6416535 4.5098227 6.973548
posterior_summary(m_alt_3, variable=c("b_Intercept","b_TestSpeaker1", "b_Group1", "sigma"))
## Estimate Est.Error Q2.5 Q97.5
## b_Intercept 1.5176950 0.9746674 -0.3891076 3.436054
## b_TestSpeaker1 -0.2748619 1.3617281 -2.9353698 2.419874
## b_Group1 -0.4523468 1.0029558 -2.4109267 1.522445
## sigma 5.8418571 0.6355697 4.4840384 6.949532
We can now also plot different estimates for the different priors:
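One way such a plot can be built from the four fitted models loaded above (a sketch, assuming ggplot2 is available):

```r
# Sketch: collect posterior summaries per prior setting and plot them side by side.
models <- list(orig = m_orig, alt1 = m_alt_1, alt2 = m_alt_2, alt3 = m_alt_3)
pars <- c("b_Intercept", "b_TestSpeaker1", "b_Group1", "sigma")
est <- do.call(rbind, lapply(names(models), function(nm) {
  s <- posterior_summary(models[[nm]], variable = pars)
  data.frame(model = nm, parameter = rownames(s),
             estimate = s[, "Estimate"],
             lower = s[, "Q2.5"], upper = s[, "Q97.5"])
}))
ggplot(est, aes(x = model, y = estimate, ymin = lower, ymax = upper)) +
  geom_pointrange() +
  facet_wrap(~ parameter, scales = "free_y") +
  labs(x = "Prior setting", y = "Estimate (95% CI)")
```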
As we can see from the (plotted) estimates, the different priors barely have any effect on the estimates of the model. This means that our model is not overly sensitive to the priors.
We set the contrasts such that we get equal priors on the different comparisons. This is important for our Bayes Factor analysis later on. The contrasts for the factors with two levels are the same as for deviation coding. However, for the factor with three levels (TestSpeaker), the contrast is set such that all pairwise comparisons receive the same prior on the effect. If we do not do this, the model will implicitly assign different priors to the different pairwise comparisons between levels, which would bias the Bayes Factors for some comparisons relative to others.
contrasts(data_rec$TestSpeaker) <- contr.equalprior_pairs
contrasts(data_rec$Group) <- contr.equalprior_pairs
contrasts(data_rec$sleepState) <- contr.equalprior_pairs
contrasts(data_rec$TestSpeaker)
contrasts(data_rec$Group)
contrasts(data_rec$sleepState)
We also scaled the continuous predictors.
dat_exp_rec$mumDist<- scale(dat_exp_rec$mumDist)
dat_exp_rec$age <- scale(dat_exp_rec$age)
dat_exp_rec$nrSpeakersDaily <- scale(dat_exp_rec$nrSpeakersDaily)
We run our model and have a look:
num_chains <- 4 # number of chains = number of processor cores
num_iter <- 80000 # number of samples per chain: we use the Savage-Dickey density ratio for hypothesis testing, which needs a LOT of samples
num_warmup <- num_iter / 2 # number of warm-up samples per chain
num_thin <- 1 # thinning: extract one out of x samples per chain
model_rec = brm(MMR ~ 1 + TestSpeaker * Group +
mumDist +
nrSpeakersDaily +
sleepState +
age +
(1 + TestSpeaker | Subj),
data = data_rec,
prior = priors,
family = gaussian(),
control = list(
adapt_delta = .99,
max_treedepth = 15
),
iter = num_iter,
chains = num_chains,
warmup = num_warmup,
thin = num_thin,
cores = num_chains,
seed = project_seed,
file = here("data", "model_output", "R2_model_rec.rds"),
file_refit = "on_change",
save_pars = save_pars(all = TRUE)
)
model_rec <- readRDS(here("data", "model_output", "R2_model_rec.rds"))
summary(model_rec)
## Warning: There were 4 divergent transitions after
## warmup. Increasing adapt_delta above 0.99 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: MMR ~ 1 + TestSpeaker * Group + mumDist + nrSpeakersDaily + sleepState + age + (1 + TestSpeaker | Subj)
## Data: data_rec (Number of observations: 200)
## Draws: 4 chains, each with iter = 80000; warmup = 40000; thin = 1;
## total post-warmup draws = 160000
##
## Group-Level Effects:
## ~Subj (Number of levels: 72)
## Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept) 2.03 0.92 0.19 3.70 1.00
## sd(TestSpeaker1) 5.35 2.28 0.57 9.39 1.00
## sd(TestSpeaker2) 2.23 1.54 0.10 5.72 1.00
## cor(Intercept,TestSpeaker1) -0.26 0.37 -0.88 0.61 1.00
## cor(Intercept,TestSpeaker2) -0.22 0.45 -0.91 0.76 1.00
## cor(TestSpeaker1,TestSpeaker2) 0.25 0.45 -0.75 0.92 1.00
## Bulk_ESS Tail_ESS
## sd(Intercept) 13181 18690
## sd(TestSpeaker1) 12286 21091
## sd(TestSpeaker2) 16482 12126
## cor(Intercept,TestSpeaker1) 42313 44063
## cor(Intercept,TestSpeaker2) 81076 88444
## cor(TestSpeaker1,TestSpeaker2) 83186 91452
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 1.50 0.96 -0.39 3.39 1.00 139984 122864
## TestSpeaker1 -0.26 1.35 -2.90 2.40 1.00 121689 122015
## TestSpeaker2 -1.24 1.11 -3.44 0.95 1.00 181526 121364
## Group1 -0.46 0.99 -2.41 1.49 1.00 135343 119984
## mumDist -0.77 0.56 -1.88 0.32 1.00 120956 118558
## nrSpeakersDaily 0.34 0.49 -0.61 1.31 1.00 142487 122709
## sleepState1 -1.60 1.94 -5.39 2.20 1.00 139282 123757
## age 0.46 0.50 -0.52 1.45 1.00 138475 120675
## TestSpeaker1:Group1 -5.53 2.40 -10.25 -0.84 1.00 139258 118222
## TestSpeaker2:Group1 2.51 2.11 -1.66 6.64 1.00 195727 124240
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 5.85 0.64 4.50 6.96 1.00 9667 8393
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
There were 4 divergent transitions after warmup. However, the model seems to have converged, since the Rhat and ESS values are very good (cf. https://mc-stan.org/rstan/reference/Rhat.html).
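To inspect those divergences more closely, sampler diagnostics can be extracted with bayesplot (which brms attaches); a sketch:

```r
# Sketch: count post-warmup divergent transitions and visualize where they occur.
np <- nuts_params(model_rec)
sum(np$Value[np$Parameter == "divergent__"])  # number of divergent transitions
# Divergent iterations are highlighted in the scatter plot:
mcmc_scatter(as.array(model_rec), pars = c("b_Intercept", "sigma"), np = np)
```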
We will plot the traces, to check model convergence:
plot(model_rec, ask = FALSE)
Here we see that the traces all look like nice hairy caterpillars, meaning that the chains have mixed (e.g., https://nicholasrjenkins.science/post/bda_in_r/bayes_with_brms/). The posterior distributions also look good.
Now we check whether the model is able to retrieve the underlying data. y is the observed data, so the data that we inputted, and y’ is the simulated data from the posterior predictive distribution.
pp_check(model_rec, ndraws=50)
This looks good: the data overlap.
We now check the posterior samples of the posterior predictive distribution, for the estimates of S1_fam, S3_fam, S4_fam, S1_unfam, S3_unfam, and S4_unfam.
data_rec <-
data_rec %>%
unite( # unite the columns TestSpeaker and Group for the posterior predictive
# checks: brms estimates the TestSpeaker-by-Group interaction, so the
# posterior predictions are structured per TestSpeaker_Group cell
"TestSpeaker_Group",
c("TestSpeaker", "Group"),
sep = "_",
remove = FALSE,
na.rm = FALSE
) %>%
select(Subj, TestSpeaker_Group, TestSpeaker, Group, MMR)
posterior_predict_model_rec <-
model_rec %>%
posterior_predict(ndraws = 2000)
PPS_rec <-
posterior_predict_model_rec %>%
ppc_stat_grouped(
y = pull(data_rec, MMR),
group = pull(data_rec, TestSpeaker_Group),
stat = "mean"
) +
ggtitle("Posterior predictive samples")
PPS_rec
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
y is the mean of the data that we put in the model. Yrep is the posterior predicted samples from our posterior distribution. We see that our actual mean is in the middle of the predicted distribution, and that the predicted distribution is a normal distribution, which is what we expect.
Now we want to test our hypotheses. Our main research question is: Is there a Voice Familiarity Benefit for phoneme recognition in infants?
In addition to this main question, we will test different contrasts to address which theoretical account of the interaction between speech and voice information best explains this benefit.
Hypothesis 1 (directional): For our main RQ, we expect that infants discriminate a vowel contrast better in groupFam-TestSpeaker1 than in groupUnfam-TestSpeaker4, shown by a more negative mismatch response (MMR; our dependent variable) in groupFam-TestSpeaker1.
Hypothesis 2 (directional): Testing the acoustic account, we expect that infants discriminate a vowel contrast better in groupUnfam-TestSpeaker3 than in groupUnfam-TestSpeaker4, shown by a more negative MMR in groupUnfam-TestSpeaker3.
Hypothesis 3 (directional): Testing the exemplar account, we expect that infants discriminate a vowel contrast better in groupUnfam-TestSpeaker1 than in groupUnfam-TestSpeaker4, shown by a more negative MMR in groupUnfam-TestSpeaker1.
Hypothesis 4 (non-directional): To test whether the 5-minute exposure to target vowel exemplars or the naturalistic exposure to the speaker’s acoustic characteristics is more beneficial for vowel discrimination, we test the difference between groupUnfam-TestSpeaker1 and groupUnfam-TestSpeaker3. The exemplar account would predict that infants discriminate the contrast better in groupUnfam-TestSpeaker1, shown by a more negative MMR in groupUnfam-TestSpeaker1, whereas the acoustic account predicts that infants discriminate the contrast better in groupUnfam-TestSpeaker3, shown by a more negative MMR in groupUnfam-TestSpeaker3.
We thus want the following contrasts:
1. groupFam-TestSpeaker1 vs. groupUnfam-TestSpeaker4
2. groupUnfam-TestSpeaker3 vs. groupUnfam-TestSpeaker4
3. groupUnfam-TestSpeaker1 vs. groupUnfam-TestSpeaker4
4. groupUnfam-TestSpeaker1 vs. groupUnfam-TestSpeaker3
If we find a more negative MMR for groupUnfam-TestSpeaker3 than for groupUnfam-TestSpeaker1, this could have to do with the order in the experiment: for pragmatic reasons, TestSpeaker3 was always tested last, which might lead to learning effects. We can, however, test the contrast (unfam1 - unfam3) - (fam1 - fam3). If this difference is significant, it shows that the effect is not solely due to the order in the experiment. The logic here is as follows: if unfam3 is more negative than unfam1, this would suggest that acoustic familiarity with the speaker (unfam3) is more helpful for vowel discrimination than having heard exemplars of the speaker without being acoustically familiar (unfam1). As mentioned above, the better discrimination for unfam3 compared to unfam1 could, however, also be due to experimental order. Yet, if we compare fam1 with fam3, we have a different comparison: in fam3 the infant has, just like unfam3, acoustic familiarity with the speaker but did not hear any exemplars. However, whereas unfam1 infants only heard exemplars without acoustic familiarity, fam1 infants had both exemplars and acoustic familiarity. The effect of acoustic familiarity is therefore also present for fam1. We thus expect that if we find better discrimination for unfam3 compared to unfam1, this effect disappears for fam3 vs. fam1.
For our analyses we will use Bayes Factors (BF). However, as a robustness check, and to be able to compare our results with other experiments that use frequentist methods, we will also compute the MAP-based p-value (pMAP; https://easystats.github.io/bayestestR/reference/p_map.html). This is the Bayesian equivalent of the p-value and is related to the odds that a parameter (described by its posterior distribution) has against the null hypothesis (h0) in Mills’ (2014, 2017) Objective Bayesian Hypothesis Testing framework. It corresponds to the density value at 0 divided by the density at the Maximum A Posteriori (MAP). Caveats of the p_MAP: 1) the p_MAP assesses the presence of an effect, not its magnitude or importance (https://easystats.github.io/bayestestR/articles/probability_of_direction.html); 2) the p_MAP is sensitive only to the amount of evidence for the alternative hypothesis (i.e., when an effect is truly present) and cannot reflect the amount of evidence in favor of the null hypothesis. A low value suggests that the effect exists, but a high value indicates uncertainty regarding its existence (not certainty that it is non-existent) (https://doi.org/10.3389/fpsyg.2019.02767).
Bayes Factors (https://doi.org/10.3389/fpsyg.2019.02767). The Bayes factor (BF) indicates the degree by which the mass of the posterior distribution has shifted further away from or closer to the null value (0) relative to the prior distribution, thus indicating if the null hypothesis has become less or more likely given the observed data. The BF is computed as a Savage-Dickey density ratio, which is also an approximation of a Bayes factor comparing the marginal likelihoods of the model against a model in which the tested parameter has been restricted to the point-null (Wagenmakers et al., 2010).
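To make both quantities concrete, here is a self-contained illustration on simulated draws (the normal(2, 1) posterior is purely hypothetical): the Savage-Dickey BF divides the posterior density at the null value by the prior density at the null, and the p_MAP divides the posterior density at the null by the posterior density at its mode.

```r
# Illustration: Savage-Dickey BF and p_MAP from draws of a single parameter.
set.seed(1)
prior_draws     <- rnorm(1e5, mean = 0, sd = 10)  # our slope prior, normal(0, 10)
posterior_draws <- rnorm(1e5, mean = 2, sd = 1)   # hypothetical posterior draws

# kernel density estimate evaluated at a point
dens_at <- function(draws, x0) {
  d <- density(draws)
  approx(d$x, d$y, xout = x0)$y
}

# Savage-Dickey density ratio: BF01 = posterior density at 0 / prior density at 0
BF01 <- dens_at(posterior_draws, 0) / dens_at(prior_draws, 0)
BF10 <- 1 / BF01

# p_MAP: posterior density at 0 / posterior density at the MAP
d_post  <- density(posterior_draws)
map_est <- d_post$x[which.max(d_post$y)]
p_MAP   <- dens_at(posterior_draws, 0) / dens_at(posterior_draws, map_est)
```

Note how, with a wide prior, the BF10 can stay modest even when the posterior has clearly shifted away from zero and the p_MAP is correspondingly low: the two measures answer different questions.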
We will use the following contrast matrix to test our contrasts:
# make custom contrasts (from https://aosmith.rbind.io/2019/04/15/custom-contrasts-emmeans/)
# cells are ordered as in emmeans(~ Group:TestSpeaker); the indicator vectors
# below pick out the fam/unfam cells for TestSpeaker1, TestSpeaker3 and TestSpeaker4
fam1 = c(1,0,0,0,0,0)
unfam1 = c(0,1,0,0,0,0)
fam3 = c(0,0,1,0,0,0)
unfam3 = c(0,0,0,1,0,0)
fam4 = c(0,0,0,0,1,0)
unfam4 = c(0,0,0,0,0,1)
custom_contrasts <- list(
list("unfam4-fam1" = unfam4 - fam1),
list("unfam4-unfam3" = unfam4 - unfam3),
list("unfam4-unfam1" = unfam4 - unfam1),
list("unfam1-unfam3" = unfam1 - unfam3)
# ,list("(unfam1-unfam3)-(fam1-fam3)" = (unfam1 - unfam3) - (fam1 - fam3)) # this is the interaction that, if significant, shows us that a possible effect of unfam3 being more negative than unfam1 is not solely because of order in the experiment.
)
We will use the following code to compute the pMAP:
## pMAP Effects Custom contrasts ------------------------
pMAP_customcontrasts_rec =
model_rec %>%
emmeans(~ Group:TestSpeaker) %>%
contrast(method = custom_contrasts) %>%
p_map() %>%
mutate(
"p < .05" = ifelse(p_MAP < .05, "*", "")
)
pMAP_customcontrasts_rec
We will use the following code to compute the Bayes Factors:
model_rec_prior <- unupdate(model_rec) # sample priors from model
## Effects Custom contrasts ------------------------
# pairwise comparisons of prior distributions
customcontrasts_pairwise_prior_rec =
model_rec_prior %>%
emmeans(~ Group:TestSpeaker) %>% # estimated marginal means
contrast(method = custom_contrasts)
# pairwise comparisons of posterior distributions
customcontrasts_pairwise_rec <-
model_rec %>%
emmeans(~ Group:TestSpeaker) %>%
contrast(method = custom_contrasts)
# Bayes Factors (Savage-Dickey density ratio)
BF_customcontrasts_rec <-
customcontrasts_pairwise_rec %>%
bf_parameters(prior = customcontrasts_pairwise_prior_rec) %>%
arrange(log_BF) # sort according to BF
# add rule-of-thumb interpretation
BF_customcontrasts_rec <-
BF_customcontrasts_rec %>%
add_column(
"interpretation" = interpret_bf(
BF_customcontrasts_rec$log_BF,
rules = "raftery1995",
log = TRUE,
include_value = TRUE,
protect_ratio = TRUE,
exact = TRUE
),
.after = "log_BF"
)
BF_customcontrasts_rec
Let’s have a look at the output:
pMAP_BF_rec <- readRDS(here("data", "hypothesis_testing", "pMAP_BF_rec.rds"))
pMAP_BF_rec
## # A tibble: 14 × 5
## Parameter p_MAP `p < .05` log_BF interpretation
## <chr> <dbl> <chr> <dbl> <chr>
## 1 b_Intercept 0.296 "" -1.85 positive evidence (BF = 1/6.38…
## 2 b_TestSpeaker1 0.980 "" -2.00 positive evidence (BF = 1/7.39…
## 3 b_TestSpeaker2 0.523 "" -1.57 positive evidence (BF = 1/4.79…
## 4 b_Group1 0.900 "" -2.21 positive evidence (BF = 1/9.08…
## 5 b_mumDist 0.392 "" -1.94 positive evidence (BF = 1/6.98…
## 6 b_nrSpeakersDaily 0.778 "" -2.77 positive evidence (BF = 1/15.9…
## 7 b_sleepState1 0.709 "" -1.30 positive evidence (BF = 1/3.68…
## 8 b_age 0.640 "" -2.57 positive evidence (BF = 1/13.1…
## 9 b_TestSpeaker1:Group1 0.0713 "" 1.20 positive evidence (BF = 3.31) …
## 10 b_TestSpeaker2:Group1 0.478 "" -0.844 weak evidence (BF = 1/2.33) ag…
## 11 unfam4-fam1 0.939 "" -2.13 positive evidence (BF = 1/8.45…
## 12 unfam4-unfam3 0.238 "" -0.399 weak evidence (BF = 1/1.49) ag…
## 13 unfam4-unfam1 0.640 "" -1.49 positive evidence (BF = 1/4.46…
## 14 unfam1-unfam3 0.648 "" -1.50 positive evidence (BF = 1/4.48…
We do not find effects for our preregistered contrasts. We thus also do not perform a sensitivity analysis for these contrasts.
In our preregistration, we chose to compare the different test speakers in the unfamiliar training speaker group, because this allowed the comparison “groupFam-TestSpeaker1 vs. groupUnfam-TestSpeaker4”, which we found interesting because it compared the most familiar speaker (the speaker that was heard during the voice familiarization as well as during the phoneme test) with the least familiar speaker (the novel speaker for the group that was not trained with a familiar speaker). In this reasoning, we assumed that we would find a voice familiarity benefit for phoneme acquisition: that learning from a familiar speaker would improve later discrimination. Therefore, we chose this contrast to test the existence of a general voice familiarity benefit for phoneme recognition. However, for a novel test speaker, we found the opposite effect: training with an unfamiliar speaker benefitted generalization to a novel speaker.
Therefore, to explore whether there is a voice familiarity benefit for phoneme recognition, we will look at the contrasts within the Familiar Training Speaker group. These infants have all been trained with the familiar speaker, and we moreover have more power because this is a within-subject comparison.
In this exploratory analysis, we thus look at the remaining contrasts:
1. groupFam-TestSpeaker1 vs. groupFam-TestSpeaker4
2. groupFam-TestSpeaker1 vs. groupFam-TestSpeaker3
3. groupFam-TestSpeaker4 vs. groupFam-TestSpeaker3
We are mostly interested in whether we find a difference for “groupFam-TestSpeaker1 vs. groupFam-TestSpeaker4”: the training speaker vs. the novel speaker.
We tested these contrasts by running the same model, with the same priors and parameters, as for the preregistered analysis (without the factor Group, since we only test within subjects). We looked both at the preregistered ROI and at a narrower frontal ROI (excluding C3, Cz, and C4).
priors <- c(set_prior("normal(3.5, 20)", class = "Intercept"),
set_prior("normal(0, 10)", class = "b"),
set_prior("normal(0, 20)", class = "sigma"))
# Function to fit the model ----------------------------------------------------
fit_model <- function(data, output_file) {
brm(MMR ~ 1 + TestSpeaker +
mumDist +
nrSpeakersDaily +
sleepState +
age +
(1 + TestSpeaker | Subj),
data = data,
prior = priors,
family = gaussian(),
control = list(
adapt_delta = .99,
max_treedepth = 15
),
iter = num_iter,
chains = num_chains,
warmup = num_warmup,
thin = num_thin,
cores = num_chains,
seed = project_seed,
file = output_file,
file_refit = "on_change",
save_pars = save_pars(all = TRUE)
)
}
datasets <- list(
list(data = data_recfam_normal, output_file = here("data", "model_output", "Rfam2_model_recfam_normal.rds")),
list(data = data_recfam_frontal, output_file = here("data", "model_output", "Rfam2_model_recfam_frontal.rds"))
)
for (dataset in datasets) {
fit_model(dataset$data, dataset$output_file)
}
We conduct the same model diagnostics as above. We will plot the traces to check model convergence:
model_recfam_n <- readRDS(here("data", "model_output", "Rfam2_model_recfam_normal.rds"))
model_recfam_f <- readRDS(here("data", "model_output", "Rfam2_model_recfam_frontal.rds"))
plot(model_recfam_n, ask = FALSE)
plot(model_recfam_f, ask = FALSE)
Here we see that the traces all look like nice hairy caterpillars, meaning that the chains have mixed (e.g., https://nicholasrjenkins.science/post/bda_in_r/bayes_with_brms/). The posterior distributions also look good.
Now we check whether the model is able to retrieve the underlying data. y is the observed data, so the data that we inputted, and y’ is the simulated data from the posterior predictive distribution.
pp_check(model_recfam_n, ndraws=50)
pp_check(model_recfam_f, ndraws=50)
This looks good for both models: the data overlap.
We now check the posterior samples of the posterior predictive distribution, for the estimates of each test speaker.
data_recfam_modeldiag_n <-
data_recfam_normal %>%
select(Subj, TestSpeaker, MMR)
posterior_predict_model_rec_n <-
model_recfam_n %>%
posterior_predict(ndraws = 2000)
PPS_rec_n <-
posterior_predict_model_rec_n %>%
ppc_stat_grouped(
y = pull(data_recfam_modeldiag_n, MMR),
group = pull(data_recfam_modeldiag_n, TestSpeaker),
stat = "mean"
) +
ggtitle("Posterior predictive samples, ROI prereg")
PPS_rec_n
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
data_recfam_modeldiag_f <-
data_recfam_frontal %>%
select(Subj, TestSpeaker, MMR)
posterior_predict_model_rec_f <-
model_recfam_f %>%
posterior_predict(ndraws = 2000)
PPS_rec_f <-
posterior_predict_model_rec_f %>%
ppc_stat_grouped(
y = pull(data_recfam_modeldiag_f, MMR),
group = pull(data_recfam_modeldiag_f, TestSpeaker),
stat = "mean"
) +
ggtitle("Posterior predictive samples, frontal ROI")
PPS_rec_f
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
y is the mean of the data that we put in the model. Yrep is the posterior predicted samples from our posterior distribution. We see that our actual mean is in the middle of the predicted distribution, and that the predicted distribution is a normal distribution, which is what we expect.
We computed the pMAP and BF with the same code as for the preregistered model, adapted for our contrasts of interest:
fam1 = c(1,0,0)
fam3 = c(0,1,0)
fam4 = c(0,0,1)
custom_contrasts <- list(
list("fam1-fam4" = fam1 - fam4),
list("fam3-fam4" = fam3 - fam4),
list("fam1-fam3" = fam1 - fam3)
)
Let’s have a look at the output for the preregistered ROI:
pMAP_BF_recfam_normal = readRDS(here("data", "hypothesis_testing", "pMAP_BF_recfam_normal.rds"))
pMAP_BF_recfam_normal
## # A tibble: 10 × 5
## Parameter p_MAP `p < .05` log_BF interpretation
## <chr> <dbl> <chr> <dbl> <chr>
## 1 b_Intercept 0.229 "" -1.33 positive evidence (BF = 1/3.80) ag…
## 2 b_TestSpeaker1 0.263 "" -0.260 weak evidence (BF = 1/1.30) against
## 3 b_TestSpeaker2 0.171 "" -0.179 weak evidence (BF = 1/1.20) against
## 4 b_mumDist 0.966 "" -2.48 positive evidence (BF = 1/11.95) a…
## 5 b_nrSpeakersDaily 0.647 "" -2.26 positive evidence (BF = 1/9.63) ag…
## 6 b_sleepState1 0.580 "" -0.825 weak evidence (BF = 1/2.28) against
## 7 b_age 0.823 "" -2.39 positive evidence (BF = 1/10.91) a…
## 8 fam1-fam4 0.0559 "" 1.10 positive evidence (BF = 3.02) in f…
## 9 fam3-fam4 0.263 "" -0.279 weak evidence (BF = 1/1.32) against
## 10 fam1-fam3 0.886 "" -1.75 positive evidence (BF = 1/5.74) ag…
Let’s have a look at the output for the frontal ROI:
pMAP_BF_recfam_frontal = readRDS(here("data", "hypothesis_testing", "pMAP_BF_recfam_frontal.rds"))
pMAP_BF_recfam_frontal
## # A tibble: 10 × 5
## Parameter p_MAP `p < .05` log_BF interpretation
## <chr> <dbl> <chr> <dbl> <chr>
## 1 b_Intercept 0.249 "" -1.38 positive evidence (BF = 1/3.96) ag…
## 2 b_TestSpeaker1 0.376 "" -0.605 weak evidence (BF = 1/1.83) against
## 3 b_TestSpeaker2 0.0464 "*" 1.13 positive evidence (BF = 3.11) in f…
## 4 b_mumDist 0.995 "" -2.46 positive evidence (BF = 1/11.65) a…
## 5 b_nrSpeakersDaily 0.531 "" -2.01 positive evidence (BF = 1/7.50) ag…
## 6 b_sleepState1 0.596 "" -0.798 weak evidence (BF = 1/2.22) against
## 7 b_age 0.575 "" -1.98 positive evidence (BF = 1/7.25) ag…
## 8 fam1-fam4 0.0240 "*" 2.05 positive evidence (BF = 7.79) in f…
## 9 fam3-fam4 0.376 "" -0.603 weak evidence (BF = 1/1.83) against
## 10 fam1-fam3 0.536 "" -1.15 positive evidence (BF = 1/3.17) ag…
As a last step, we will perform a sensitivity analysis for the BFs of the significant effect, to check to what extent our BFs depend on the effect size assumed in our prior.
The code we used for the sensitivity analysis:
# set values ----------------------------
num_chains <- 4
num_iter <- 80000
num_warmup <- num_iter / 2
num_thin <- 1
# Run the model with different sds
prior_sd <- c(1, 5, 8, 10, 15, 20, 30, 40, 50)
# ROI normal ----------
# Run the models
for (i in 1:length(prior_sd)) {
psd <- prior_sd[i]
fit <- brm(MMR ~ 1 + TestSpeaker +
mumDist +
nrSpeakersDaily +
sleepState +
age +
(1 + TestSpeaker | Subj),
data = data_recfam_normal,
prior = c(
set_prior("normal(3.5, 20)", class = "Intercept"),
set_prior(paste0("normal(0,", psd, ")"), class = "b"),
set_prior("normal(0, 20)", class = "sigma")
),
family = gaussian(),
control = list(
adapt_delta = .99,
max_treedepth = 15
),
iter = num_iter,
chains = num_chains,
warmup = num_warmup,
thin = num_thin,
cores = num_chains,
seed = project_seed,
save_pars = save_pars(all = TRUE),
file=here("data", "sensitivity_analysis", paste0("Rfam5_normal_sensAnal_BF_priorsd_", psd, ".rds"))
)
}
# Custom contrasts
fam1 <- c(1, 0, 0)
fam3 <- c(0, 1, 0)
fam4 <- c(0, 0, 1)
custom_contrasts <- list(
"fam1-fam4" = fam1 - fam4,
"fam3-fam4" = fam3 - fam4,
"fam1-fam3" = fam1 - fam3
)
## BFs-------------------------
BF <- c()
for (i in 1:length(prior_sd)) {
psd <- prior_sd[i]
fit <- readRDS(here("data", "sensitivity_analysis", paste0("Rfam5_normal_sensAnal_BF_priorsd_", psd, ".rds")))
fit_priors <- unupdate(fit)
m_prior <- fit_priors %>%
emmeans(~ TestSpeaker) %>%
contrast(method = custom_contrasts)
m_post <- fit %>%
emmeans(~ TestSpeaker) %>%
contrast(method = custom_contrasts)
BF_current <- bf_parameters(m_post, prior = m_prior) %>%
filter(Parameter == "fam1-fam4")
BF_current <- exp(BF_current$log_BF) # bf_parameters() returns log(BF)
BF <- c(BF, BF_current)
}
res <- data.frame(prior_sd, BF, logBF = log10(BF))
save(res, file = here("data", "sensitivity_analysis", "Rfam5_normal_sensAnal_BF_fam1-fam4.rda"))
## Plot --------------------------------
breaks <- c(1/100, 1/50, 1/20, 1/10, 1/5, 1/3, 1, 3, 5, 10, 20, 50, 100)
p = ggplot(res, aes(x = prior_sd, y = BF)) +
geom_point(size = 2) +
geom_line() +
geom_hline(yintercept = 1, linetype = "dashed") +
scale_x_continuous("Normal prior width (SD)\n") +
scale_y_log10(expression("BF"[10]), breaks = breaks, labels = MASS::fractions(breaks)) +
coord_cartesian(ylim = c(1 / 100, 100), xlim = c(0, tail(prior_sd, n = 1))) +
annotate("text", x = 40, y = 30, label = expression("Evidence in favor of H"[1]), size = 5) +
annotate("text", x = 40, y = 1 / 30, label = expression("Evidence in favor of H"[0]), size = 5) +
theme(axis.text.y = element_text(size = 8)) +
# ggtitle("Bayes factors for contrast fam1-fam4 (ROI normal)") +
theme_apa()
# ROI frontal ----------
# Run the models
for (i in 1:length(prior_sd)) {
psd <- prior_sd[i]
fit <- brm(MMR ~ 1 + TestSpeaker +
mumDist +
nrSpeakersDaily +
sleepState +
age +
(1 + TestSpeaker | Subj),
data = data_recfam_frontal,
prior = c(
set_prior("normal(3.5, 20)", class = "Intercept"),
set_prior(paste0("normal(0,", psd, ")"), class = "b"),
set_prior("normal(0, 20)", class = "sigma")
),
family = gaussian(),
control = list(
adapt_delta = .99,
max_treedepth = 15
),
iter = num_iter,
chains = num_chains,
warmup = num_warmup,
thin = num_thin,
cores = num_chains,
seed = project_seed,
save_pars = save_pars(all = TRUE),
file=here("data", "sensitivity_analysis", paste0("Rfam5_frontal_sensAnal_BF_priorsd_", psd, ".rds"))
)
}
# Custom contrasts
fam1 <- c(1, 0, 0)
fam3 <- c(0, 1, 0)
fam4 <- c(0, 0, 1)
custom_contrasts <- list(
"fam1-fam4" = fam1 - fam4,
"fam3-fam4" = fam3 - fam4,
"fam1-fam3" = fam1 - fam3
)
## BFs-------------------------
BF <- c()
for (i in 1:length(prior_sd)) {
psd <- prior_sd[i]
fit <- readRDS(here("data", "sensitivity_analysis", paste0("Rfam5_frontal_sensAnal_BF_priorsd_", psd, ".rds")))
fit_priors <- unupdate(fit)
m_prior <- fit_priors %>%
emmeans(~ TestSpeaker) %>%
contrast(method = custom_contrasts)
m_post <- fit %>%
emmeans(~ TestSpeaker) %>%
contrast(method = custom_contrasts)
BF_current <- bf_parameters(m_post, prior = m_prior) %>%
filter(Parameter == "fam1-fam4")
BF_current <- exp(BF_current$log_BF) # bf_parameters() returns log(BF)
BF <- c(BF, BF_current)
}
res <- data.frame(prior_sd, BF, logBF = log10(BF))
save(res, file = here("data", "sensitivity_analysis", "Rfam5_frontal_sensAnal_BF_fam1-fam4.rda"))
## Plot --------------------------------
breaks <- c(1/100, 1/50, 1/20, 1/10, 1/5, 1/3, 1, 3, 5, 10, 20, 50, 100)
p = ggplot(res, aes(x = prior_sd, y = BF)) +
geom_point(size = 2) +
geom_line() +
geom_hline(yintercept = 1, linetype = "dashed") +
scale_x_continuous("Normal prior width (SD)\n") +
scale_y_log10(expression("BF"[10]), breaks = breaks, labels = MASS::fractions(breaks)) +
coord_cartesian(ylim = c(1 / 100, 100), xlim = c(0, tail(prior_sd, n = 1))) +
annotate("text", x = 40, y = 30, label = expression("Evidence in favor of H"[1]), size = 5) +
annotate("text", x = 40, y = 1 / 30, label = expression("Evidence in favor of H"[0]), size = 5) +
theme(axis.text.y = element_text(size = 8)) +
# ggtitle("Bayes factors for contrast fam1-fam4 (ROI frontal)") +
theme_apa()
Let’s have a look at the plots:
## Warning in is.na(x): is.na() applied to non-(list or vector) of type
## 'expression'
What we expect to see: if we assume a small effect size (low SD), we find support for H1, but if we assume a very large effect size (very high SD), we find support for H0. A narrow prior with a low standard deviation essentially says that we expect the true effect size to be close to zero; if the observed data show an effect consistent with this narrow prior, the Bayes factor will likely favor the alternative hypothesis (H1). A broad prior with a high standard deviation, by contrast, indicates that we expect a wide range of possible effect sizes, including very large ones; if the observed effect is not as large as the broad prior allows, the Bayes factor may favor the null hypothesis (H0), because the data do not provide strong evidence for such a large effect.
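This prior-width dependence can be illustrated with a toy Savage–Dickey density-ratio computation (a hypothetical one-parameter example with made-up numbers, not part of our analysis): as the prior widens, its density at zero shrinks, while the posterior density at zero stabilizes near the likelihood, so BF10 eventually drops below 1.

```r
# Toy Savage-Dickey illustration (made-up numbers, not the analysis data):
# observed effect estimate d with standard error se, prior beta ~ N(0, prior_sd).
# With a normal likelihood the posterior is also normal (conjugate update),
# and BF10 = prior density at 0 / posterior density at 0.
bf10_toy <- function(d, se, prior_sd) {
  post_var  <- 1 / (1 / se^2 + 1 / prior_sd^2)  # conjugate normal update
  post_mean <- post_var * d / se^2
  dnorm(0, mean = 0, sd = prior_sd) / dnorm(0, mean = post_mean, sd = sqrt(post_var))
}
sds <- c(1, 5, 8, 10, 15, 20, 30, 40, 50)
round(bf10_toy(d = 2, se = 1, prior_sd = sds), 2)
# over this range, BF10 shrinks as the prior widens
```

With d = 2 and se = 1, BF10 is above 1 for a narrow prior (SD = 1) but well below 1 for SD = 50, mirroring the pattern in the sensitivity plots.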
Indeed, the plots show exactly that: the higher the SD of the prior, the less evidence we find for H1. The effect for the frontal ROI appears to be very robust, since even with a very large prior SD (50) we still do not find evidence for H0. For the preregistered ROI, the effect is somewhat less robust: the evidence in favor of H1 disappears for prior SDs of 30 and upwards.
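A robustness threshold like this can be read off the saved sensitivity-analysis data frame directly. The sketch below uses toy BF values (chosen only for illustration, not our actual results) in a data frame shaped like `res`:

```r
# Hypothetical robustness check (toy BF values, not our actual results):
# given a data frame shaped like `res` from the sensitivity analysis above,
# report the smallest prior SD at which BF10 no longer exceeds 3
# (a conventional threshold for positive evidence in favor of H1).
res_toy <- data.frame(
  prior_sd = c(1, 5, 8, 10, 15, 20, 30, 40, 50),
  BF       = c(9.2, 7.1, 5.4, 4.6, 3.6, 3.1, 2.4, 1.8, 1.3)
)
min(res_toy$prior_sd[res_toy$BF < 3])
```

Running the same one-liner on the saved `res` objects gives the SD at which the evidence for H1 vanishes for each ROI.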
sessioninfo::session_info()
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 4.2.2 (2022-10-31)
## os macOS Big Sur ... 10.16
## system x86_64, darwin17.0
## ui X11
## language (EN)
## collate en_US.UTF-8
## ctype en_US.UTF-8
## tz Europe/Berlin
## date 2025-11-20
## pandoc 2.19.2 @ /Applications/RStudio.app/Contents/MacOS/quarto/bin/tools/ (via rmarkdown)
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## package * version date (UTC) lib source
## abind 1.4-5 2016-07-21 [2] CRAN (R 4.2.0)
## askpass 1.1 2019-01-13 [2] CRAN (R 4.2.0)
## assertthat 0.2.1 2019-03-21 [2] CRAN (R 4.2.0)
## backports 1.4.1 2021-12-13 [2] CRAN (R 4.2.0)
## base64enc 0.1-3 2015-07-28 [2] CRAN (R 4.2.0)
## bayesplot * 1.10.0 2022-11-16 [2] CRAN (R 4.2.0)
## bayestestR * 0.13.1 2023-04-07 [2] CRAN (R 4.2.0)
## boot 1.3-28.1 2022-11-22 [2] CRAN (R 4.2.0)
## bridgesampling 1.1-2 2021-04-16 [2] CRAN (R 4.2.0)
## brms * 2.18.0 2022-09-19 [2] CRAN (R 4.2.0)
## Brobdingnag 1.2-9 2022-10-19 [2] CRAN (R 4.2.0)
## broom 1.0.1 2022-08-29 [2] CRAN (R 4.2.0)
## bslib 0.4.1 2022-11-02 [2] CRAN (R 4.2.0)
## cachem 1.0.6 2021-08-19 [2] CRAN (R 4.2.0)
## callr 3.7.3 2022-11-02 [2] CRAN (R 4.2.0)
## cellranger 1.1.0 2016-07-27 [2] CRAN (R 4.2.0)
## checkmate 2.1.0 2022-04-21 [2] CRAN (R 4.2.0)
## cli 3.6.2 2023-12-11 [2] CRAN (R 4.2.0)
## coda 0.19-4 2020-09-30 [2] CRAN (R 4.2.0)
## codetools 0.2-18 2020-11-04 [2] CRAN (R 4.2.2)
## colorspace 2.0-3 2022-02-21 [2] CRAN (R 4.2.0)
## colourpicker 1.2.0 2022-10-28 [2] CRAN (R 4.2.0)
## correlation * 0.8.4 2023-04-06 [2] CRAN (R 4.2.0)
## corrplot * 0.92 2021-11-18 [2] CRAN (R 4.2.0)
## crayon 1.5.2 2022-09-29 [2] CRAN (R 4.2.0)
## credentials 1.3.2 2021-11-29 [2] CRAN (R 4.2.0)
## crosstalk 1.2.0 2021-11-04 [2] CRAN (R 4.2.0)
## datawizard * 1.0.0 2025-01-10 [1] CRAN (R 4.2.2)
## DBI 1.1.3 2022-06-18 [2] CRAN (R 4.2.0)
## dbplyr 2.2.1 2022-06-27 [2] CRAN (R 4.2.0)
## digest 0.6.30 2022-10-18 [2] CRAN (R 4.2.0)
## distributional 0.3.1 2022-09-02 [2] CRAN (R 4.2.0)
## dplyr * 1.0.10 2022-09-01 [2] CRAN (R 4.2.0)
## DT 0.26 2022-10-19 [2] CRAN (R 4.2.0)
## dygraphs 1.1.1.6 2018-07-11 [2] CRAN (R 4.2.0)
## easystats * 0.7.0 2023-11-05 [2] CRAN (R 4.2.2)
## effectsize * 0.8.6 2023-09-14 [2] CRAN (R 4.2.0)
## ellipsis 0.3.2 2021-04-29 [2] CRAN (R 4.2.0)
## emmeans * 1.8.3 2022-12-06 [2] CRAN (R 4.2.0)
## estimability 1.4.1 2022-08-05 [2] CRAN (R 4.2.0)
## evaluate 0.18 2022-11-07 [2] CRAN (R 4.2.0)
## fansi 1.0.3 2022-03-24 [2] CRAN (R 4.2.0)
## farver 2.1.1 2022-07-06 [2] CRAN (R 4.2.0)
## fastmap 1.1.0 2021-01-25 [2] CRAN (R 4.2.0)
## forcats * 0.5.2 2022-08-19 [2] CRAN (R 4.2.0)
## fs 1.5.2 2021-12-08 [2] CRAN (R 4.2.0)
## gamm4 0.2-6 2020-04-03 [2] CRAN (R 4.2.0)
## gargle 1.2.1 2022-09-08 [2] CRAN (R 4.2.0)
## generics 0.1.3 2022-07-05 [2] CRAN (R 4.2.0)
## gert 1.9.2 2022-12-05 [2] CRAN (R 4.2.0)
## GGally 2.2.0 2023-11-22 [2] CRAN (R 4.2.0)
## ggmcmc * 1.5.1.1 2021-02-10 [2] CRAN (R 4.2.0)
## ggplot2 * 3.4.4 2023-10-12 [2] CRAN (R 4.2.0)
## ggstats 0.5.1 2023-11-21 [2] CRAN (R 4.2.0)
## gh 1.3.1 2022-09-08 [2] CRAN (R 4.2.0)
## glue 1.6.2 2022-02-24 [2] CRAN (R 4.2.0)
## googledrive 2.0.0 2021-07-08 [2] CRAN (R 4.2.0)
## googlesheets4 1.0.1 2022-08-13 [2] CRAN (R 4.2.0)
## gridExtra 2.3 2017-09-09 [2] CRAN (R 4.2.0)
## gtable 0.3.1 2022-09-01 [2] CRAN (R 4.2.0)
## gtools 3.9.4 2022-11-27 [2] CRAN (R 4.2.0)
## haven 2.5.1 2022-08-22 [2] CRAN (R 4.2.0)
## here * 1.0.1 2020-12-13 [2] CRAN (R 4.2.0)
## highr 0.11 2024-05-26 [1] CRAN (R 4.2.2)
## hms 1.1.2 2022-08-19 [2] CRAN (R 4.2.0)
## htmltools 0.5.4 2022-12-07 [2] CRAN (R 4.2.0)
## htmlwidgets 1.5.4 2021-09-08 [2] CRAN (R 4.2.0)
## httpuv 1.6.6 2022-09-08 [2] CRAN (R 4.2.0)
## httr 1.4.4 2022-08-17 [2] CRAN (R 4.2.0)
## igraph 1.3.5 2022-09-22 [2] CRAN (R 4.2.0)
## inline 0.3.19 2021-05-31 [2] CRAN (R 4.2.0)
## insight * 1.0.2 2025-02-06 [1] CRAN (R 4.2.2)
## jquerylib 0.1.4 2021-04-26 [2] CRAN (R 4.2.0)
## jsonlite 1.8.4 2022-12-06 [2] CRAN (R 4.2.0)
## knitr 1.41 2022-11-18 [2] CRAN (R 4.2.0)
## labeling 0.4.2 2020-10-20 [2] CRAN (R 4.2.0)
## later 1.3.0 2021-08-18 [2] CRAN (R 4.2.0)
## lattice 0.20-45 2021-09-22 [2] CRAN (R 4.2.2)
## lifecycle 1.0.3 2022-10-07 [2] CRAN (R 4.2.0)
## lme4 1.1-31 2022-11-01 [2] CRAN (R 4.2.0)
## logspline * 2.1.21 2023-10-26 [2] CRAN (R 4.2.0)
## loo 2.5.1 2022-03-24 [2] CRAN (R 4.2.0)
## lubridate 1.9.0 2022-11-06 [2] CRAN (R 4.2.0)
## magrittr 2.0.3 2022-03-30 [2] CRAN (R 4.2.0)
## markdown 1.4 2022-11-16 [2] CRAN (R 4.2.0)
## MASS 7.3-58.1 2022-08-03 [2] CRAN (R 4.2.2)
## Matrix 1.5-3 2022-11-11 [2] CRAN (R 4.2.0)
## matrixStats 0.63.0 2022-11-18 [2] CRAN (R 4.2.0)
## mgcv 1.8-41 2022-10-21 [2] CRAN (R 4.2.2)
## mime 0.12 2021-09-28 [2] CRAN (R 4.2.0)
## miniUI 0.1.1.1 2018-05-18 [2] CRAN (R 4.2.0)
## minqa 1.2.5 2022-10-19 [2] CRAN (R 4.2.0)
## modelbased * 0.8.6 2023-01-13 [2] CRAN (R 4.2.0)
## modelr 0.1.10 2022-11-11 [2] CRAN (R 4.2.0)
## multcomp 1.4-20 2022-08-07 [2] CRAN (R 4.2.0)
## munsell 0.5.0 2018-06-12 [2] CRAN (R 4.2.0)
## mvtnorm 1.1-3 2021-10-08 [2] CRAN (R 4.2.0)
## nlme 3.1-160 2022-10-10 [2] CRAN (R 4.2.2)
## nloptr 2.0.3 2022-05-26 [2] CRAN (R 4.2.0)
## openssl 2.0.5 2022-12-06 [2] CRAN (R 4.2.0)
## papaja * 0.1.1 2022-07-05 [2] CRAN (R 4.2.0)
## parameters * 0.21.4 2024-02-04 [2] CRAN (R 4.2.2)
## performance * 0.10.8 2023-10-30 [2] CRAN (R 4.2.0)
## pillar 1.8.1 2022-08-19 [2] CRAN (R 4.2.0)
## pkgbuild 1.4.0 2022-11-27 [2] CRAN (R 4.2.0)
## pkgconfig 2.0.3 2019-09-22 [2] CRAN (R 4.2.0)
## plyr 1.8.8 2022-11-11 [2] CRAN (R 4.2.0)
## posterior 1.3.1 2022-09-06 [2] CRAN (R 4.2.0)
## prereg 0.6.0 2022-01-20 [2] CRAN (R 4.2.0)
## prettyunits 1.1.1 2020-01-24 [2] CRAN (R 4.2.0)
## processx 3.8.0 2022-10-26 [2] CRAN (R 4.2.0)
## projpred 2.2.2 2022-11-09 [2] CRAN (R 4.2.0)
## promises 1.2.0.1 2021-02-11 [2] CRAN (R 4.2.0)
## ps 1.7.2 2022-10-26 [2] CRAN (R 4.2.0)
## purrr * 1.0.2 2023-08-10 [2] CRAN (R 4.2.0)
## R6 2.5.1 2021-08-19 [2] CRAN (R 4.2.0)
## ranger 0.14.1 2022-06-18 [2] CRAN (R 4.2.0)
## RColorBrewer * 1.1-3 2022-04-03 [2] CRAN (R 4.2.0)
## Rcpp * 1.0.9 2022-07-08 [2] CRAN (R 4.2.0)
## RcppParallel 5.1.5 2022-01-05 [2] CRAN (R 4.2.0)
## readr * 2.1.3 2022-10-01 [2] CRAN (R 4.2.0)
## readxl 1.4.1 2022-08-17 [2] CRAN (R 4.2.0)
## renv 0.16.0 2022-09-29 [2] CRAN (R 4.2.0)
## report * 0.5.8 2023-12-07 [2] CRAN (R 4.2.2)
## reprex 2.0.2 2022-08-17 [2] CRAN (R 4.2.0)
## reshape2 1.4.4 2020-04-09 [2] CRAN (R 4.2.0)
## rlang 1.1.2 2023-11-04 [2] CRAN (R 4.2.0)
## rmarkdown 2.18 2022-11-09 [2] CRAN (R 4.2.0)
## rprojroot 2.0.3 2022-04-02 [2] CRAN (R 4.2.0)
## rstan 2.21.7 2022-09-08 [2] CRAN (R 4.2.0)
## rstantools 2.2.0 2022-04-08 [2] CRAN (R 4.2.0)
## rstudioapi 0.14 2022-08-22 [2] CRAN (R 4.2.0)
## rticles 0.24 2022-08-25 [2] CRAN (R 4.2.0)
## rvest 1.0.3 2022-08-19 [2] CRAN (R 4.2.0)
## sandwich 3.0-2 2022-06-15 [2] CRAN (R 4.2.0)
## sass 0.4.4 2022-11-24 [2] CRAN (R 4.2.0)
## scales 1.2.1 2022-08-20 [2] CRAN (R 4.2.0)
## see * 0.8.1 2023-11-03 [2] CRAN (R 4.2.2)
## sessioninfo * 1.2.2 2021-12-06 [1] CRAN (R 4.2.0)
## shiny 1.7.3 2022-10-25 [2] CRAN (R 4.2.0)
## shinyjs 2.1.0 2021-12-23 [2] CRAN (R 4.2.0)
## shinystan 2.6.0 2022-03-03 [2] CRAN (R 4.2.0)
## shinythemes 1.2.0 2021-01-25 [2] CRAN (R 4.2.0)
## StanHeaders 2.21.0-7 2020-12-17 [2] CRAN (R 4.2.0)
## stringi 1.7.8 2022-07-11 [2] CRAN (R 4.2.0)
## stringr * 1.5.0 2022-12-02 [2] CRAN (R 4.2.0)
## survival 3.4-0 2022-08-09 [2] CRAN (R 4.2.2)
## sys 3.4.1 2022-10-18 [2] CRAN (R 4.2.0)
## tensorA 0.36.2 2020-11-19 [2] CRAN (R 4.2.0)
## TH.data 1.1-1 2022-04-26 [2] CRAN (R 4.2.0)
## threejs 0.3.3 2020-01-21 [2] CRAN (R 4.2.0)
## tibble * 3.1.8 2022-07-22 [2] CRAN (R 4.2.0)
## tidyr * 1.3.0 2023-01-24 [2] CRAN (R 4.2.0)
## tidyselect 1.2.0 2022-10-10 [2] CRAN (R 4.2.0)
## tidyverse * 1.3.2 2022-07-18 [2] CRAN (R 4.2.0)
## timechange 0.1.1 2022-11-04 [2] CRAN (R 4.2.0)
## tinylabels * 0.2.3 2022-02-06 [2] CRAN (R 4.2.0)
## tinytex 0.43 2022-12-13 [2] CRAN (R 4.2.2)
## tzdb 0.3.0 2022-03-28 [2] CRAN (R 4.2.0)
## usethis 2.1.6 2022-05-25 [2] CRAN (R 4.2.0)
## utf8 1.2.2 2021-07-24 [2] CRAN (R 4.2.0)
## vctrs 0.6.5 2023-12-01 [2] CRAN (R 4.2.0)
## withr 2.5.0 2022-03-03 [2] CRAN (R 4.2.0)
## worcs * 0.1.10 2022-07-20 [2] CRAN (R 4.2.0)
## xfun 0.35 2022-11-16 [2] CRAN (R 4.2.0)
## xml2 1.3.3 2021-11-30 [2] CRAN (R 4.2.0)
## xtable 1.8-4 2019-04-21 [2] CRAN (R 4.2.0)
## xts 0.12.2 2022-10-16 [2] CRAN (R 4.2.0)
## yaml 2.3.6 2022-10-18 [2] CRAN (R 4.2.0)
## zoo 1.8-11 2022-09-17 [2] CRAN (R 4.2.0)
##
## [1] /Users/giselagovaart/Library/R/x86_64/4.2/library
## [2] /Library/Frameworks/R.framework/Versions/4.2/Resources/library
##
## ──────────────────────────────────────────────────────────────────────────────
cite_packages()
## - Aust F, Barth M (2022). _papaja: Prepare reproducible APA journal articles with R Markdown_. R package version 0.1.1, <https://github.com/crsh/papaja>.
## - Barth M (2022). _tinylabels: Lightweight Variable Labels_. R package version 0.2.3, <https://cran.r-project.org/package=tinylabels>.
## - Ben-Shachar MS, Lüdecke D, Makowski D (2020). "effectsize: Estimation of Effect Size Indices and Standardized Parameters." _Journal of Open Source Software_, *5*(56), 2815. doi:10.21105/joss.02815 <https://doi.org/10.21105/joss.02815>.
## - Bürkner P (2017). "brms: An R Package for Bayesian Multilevel Models Using Stan." _Journal of Statistical Software_, *80*(1), 1-28. doi:10.18637/jss.v080.i01 <https://doi.org/10.18637/jss.v080.i01>. Bürkner P (2018). "Advanced Bayesian Multilevel Modeling with the R Package brms." _The R Journal_, *10*(1), 395-411. doi:10.32614/RJ-2018-017 <https://doi.org/10.32614/RJ-2018-017>. Bürkner P (2021). "Bayesian Item Response Modeling in R with brms and Stan." _Journal of Statistical Software_, *100*(5), 1-54. doi:10.18637/jss.v100.i05 <https://doi.org/10.18637/jss.v100.i05>.
## - Eddelbuettel D, François R (2011). "Rcpp: Seamless R and C++ Integration." _Journal of Statistical Software_, *40*(8), 1-18. doi:10.18637/jss.v040.i08 <https://doi.org/10.18637/jss.v040.i08>. Eddelbuettel D (2013). _Seamless R and C++ Integration with Rcpp_. Springer, New York. doi:10.1007/978-1-4614-6868-4 <https://doi.org/10.1007/978-1-4614-6868-4>, ISBN 978-1-4614-6867-7. Eddelbuettel D, Balamuta JJ (2018). "Extending R with C++: A Brief Introduction to Rcpp." _The American Statistician_, *72*(1), 28-36. doi:10.1080/00031305.2017.1375990 <https://doi.org/10.1080/00031305.2017.1375990>.
## - Fernández-i-Marín X (2016). "ggmcmc: Analysis of MCMC Samples and Bayesian Inference." _Journal of Statistical Software_, *70*(9), 1-20. doi:10.18637/jss.v070.i09 <https://doi.org/10.18637/jss.v070.i09>.
## - Gabry J, Mahr T (2022). "bayesplot: Plotting for Bayesian Models." R package version 1.10.0, <https://mc-stan.org/bayesplot/>. Gabry J, Simpson D, Vehtari A, Betancourt M, Gelman A (2019). "Visualization in Bayesian workflow." _J. R. Stat. Soc. A_, *182*, 389-402. doi:10.1111/rssa.12378 <https://doi.org/10.1111/rssa.12378>.
## - Kooperberg C (2023). _logspline: Routines for Logspline Density Estimation_. R package version 2.1.21, <https://CRAN.R-project.org/package=logspline>.
## - Lenth R (2022). _emmeans: Estimated Marginal Means, aka Least-Squares Means_. R package version 1.8.3, <https://CRAN.R-project.org/package=emmeans>.
## - Lüdecke D, Ben-Shachar M, Patil I, Makowski D (2020). "Extracting, Computing and Exploring the Parameters of Statistical Models using R." _Journal of Open Source Software_, *5*(53), 2445. doi:10.21105/joss.02445 <https://doi.org/10.21105/joss.02445>.
## - Lüdecke D, Ben-Shachar M, Patil I, Waggoner P, Makowski D (2021). "performance: An R Package for Assessment, Comparison and Testing of Statistical Models." _Journal of Open Source Software_, *6*(60), 3139. doi:10.21105/joss.03139 <https://doi.org/10.21105/joss.03139>.
## - Lüdecke D, Ben-Shachar M, Patil I, Wiernik B, Bacher E, Thériault R, Makowski D (2022). "easystats: Framework for Easy Statistical Modeling, Visualization, and Reporting." _CRAN_. R package, <https://easystats.github.io/easystats/>.
## - Lüdecke D, Patil I, Ben-Shachar M, Wiernik B, Waggoner P, Makowski D (2021). "see: An R Package for Visualizing Statistical Models." _Journal of Open Source Software_, *6*(64), 3393. doi:10.21105/joss.03393 <https://doi.org/10.21105/joss.03393>.
## - Lüdecke D, Waggoner P, Makowski D (2019). "insight: A Unified Interface to Access Information from Model Objects in R." _Journal of Open Source Software_, *4*(38), 1412. doi:10.21105/joss.01412 <https://doi.org/10.21105/joss.01412>.
## - Makowski D, Ben-Shachar M, Lüdecke D (2019). "bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework." _Journal of Open Source Software_, *4*(40), 1541. doi:10.21105/joss.01541 <https://doi.org/10.21105/joss.01541>, <https://joss.theoj.org/papers/10.21105/joss.01541>.
## - Makowski D, Ben-Shachar M, Patil I, Lüdecke D (2020). "Estimation of Model-Based Predictions, Contrasts and Means." _CRAN_. <https://github.com/easystats/modelbased>.
## - Makowski D, Lüdecke D, Patil I, Thériault R, Ben-Shachar M, Wiernik B (2023). "Automated Results Reporting as a Practical Tool to Improve Reproducibility and Methodological Best Practices Adoption." _CRAN_. <https://easystats.github.io/report/>.
## - Makowski D, Wiernik B, Patil I, Lüdecke D, Ben-Shachar M (2022). "correlation: Methods for Correlation Analysis." Version 0.8.3, <https://CRAN.R-project.org/package=correlation>. Makowski D, Ben-Shachar M, Patil I, Lüdecke D (2020). "Methods and Algorithms for Correlation Analysis in R." _Journal of Open Source Software_, *5*(51), 2306. doi:10.21105/joss.02306 <https://doi.org/10.21105/joss.02306>, <https://joss.theoj.org/papers/10.21105/joss.02306>.
## - Müller K (2020). _here: A Simpler Way to Find Your Files_. R package version 1.0.1, <https://CRAN.R-project.org/package=here>.
## - Müller K, Wickham H (2022). _tibble: Simple Data Frames_. R package version 3.1.8, <https://CRAN.R-project.org/package=tibble>.
## - Neuwirth E (2022). _RColorBrewer: ColorBrewer Palettes_. R package version 1.1-3, <https://CRAN.R-project.org/package=RColorBrewer>.
## - Patil I, Makowski D, Ben-Shachar M, Wiernik B, Bacher E, Lüdecke D (2022). "datawizard: An R Package for Easy Data Preparation and Statistical Transformations." _Journal of Open Source Software_, *7*(78), 4684. doi:10.21105/joss.04684 <https://doi.org/10.21105/joss.04684>.
## - R Core Team (2022). _R: A Language and Environment for Statistical Computing_. R Foundation for Statistical Computing, Vienna, Austria. <https://www.R-project.org/>.
## - Van Lissa CJ, Peikert A, Brandmaier AM (2022). _worcs: Workflow for Open Reproducible Code in Science_. R package version 0.1.10, <https://CRAN.R-project.org/package=worcs>.
## - Wei T, Simko V (2021). _R package 'corrplot': Visualization of a Correlation Matrix_. (Version 0.92), <https://github.com/taiyun/corrplot>.
## - Wickham H (2016). _ggplot2: Elegant Graphics for Data Analysis_. Springer-Verlag New York. ISBN 978-3-319-24277-4, <https://ggplot2.tidyverse.org>.
## - Wickham H (2022). _forcats: Tools for Working with Categorical Variables (Factors)_. R package version 0.5.2, <https://CRAN.R-project.org/package=forcats>.
## - Wickham H (2022). _stringr: Simple, Consistent Wrappers for Common String Operations_. R package version 1.5.0, <https://CRAN.R-project.org/package=stringr>.
## - Wickham H, Averick M, Bryan J, Chang W, McGowan LD, François R, Grolemund G, Hayes A, Henry L, Hester J, Kuhn M, Pedersen TL, Miller E, Bache SM, Müller K, Ooms J, Robinson D, Seidel DP, Spinu V, Takahashi K, Vaughan D, Wilke C, Woo K, Yutani H (2019). "Welcome to the tidyverse." _Journal of Open Source Software_, *4*(43), 1686. doi:10.21105/joss.01686 <https://doi.org/10.21105/joss.01686>.
## - Wickham H, Chang W, Flight R, Müller K, Hester J (2021). _sessioninfo: R Session Information_. R package version 1.2.2, <https://CRAN.R-project.org/package=sessioninfo>.
## - Wickham H, François R, Henry L, Müller K (2022). _dplyr: A Grammar of Data Manipulation_. R package version 1.0.10, <https://CRAN.R-project.org/package=dplyr>.
## - Wickham H, Henry L (2023). _purrr: Functional Programming Tools_. R package version 1.0.2, <https://CRAN.R-project.org/package=purrr>.
## - Wickham H, Hester J, Bryan J (2022). _readr: Read Rectangular Text Data_. R package version 2.1.3, <https://CRAN.R-project.org/package=readr>.
## - Wickham H, Vaughan D, Girlich M (2023). _tidyr: Tidy Messy Data_. R package version 1.3.0, <https://CRAN.R-project.org/package=tidyr>.